Thursday, April 28, 2011

Fossil Hunter's guide to Mars



I worked directly with Charles Shults as he wrote the book A Fossil Hunter's Guide to Mars, critiquing everything, proofreading, and even coming up with the name for the book. So I am very glad to see that he's getting some mainstream media exposure for his work. When he first told me he had found fossils on Mars in the Spirit and Opportunity images, I was skeptical but at least willing to see the evidence. The clincher for me was this image, taken by the rover Opportunity on Sol 111. In the bottom left corner of that NASA image you'll find the object shown at right. That rock exhibits fivefold radial symmetry: rotate it in increments of 72 degrees and the pattern facing us looks the same. No geological process produces rocks of this kind, and no mineral with this kind of symmetry occurs in nature (such quasicrystals can be made in tightly controlled laboratory settings, but they are microscopic). The only way for such a rock to appear is if a life form (such as a starfish) becomes fossilized. Therefore, fossils exist on Mars. Not convinced? You can buy the book and check the evidence (presented in exhaustive detail) for yourself.

Saturday, April 23, 2011

SpaceX: a man on Mars in ten years

Here's an interview with Elon Musk by the Wall Street Journal. Time frame for a man on Mars? "Best case, ten years. Worst case, fifteen to twenty."

Thursday, April 21, 2011

Delta Robots in LEGO

I love LEGO. The title of this blog might indicate some interest in robots. A few days ago I posted some free software for simulating Canfield Joints in POVray.

So, it pleased me enormously to see the LEGO Delta Robots via Hack A Day.



A Delta Robot isn't exactly the same as a Canfield joint. The parallelograms formed by the arms force the end effector to keep the same orientation as the base on top. In contrast, a Stewart Platform behaves more like a Canfield joint, in that the end effector's orientation can be different from that of the base.
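
For anyone who wants to play with the geometry, here's a rough Python sketch of the widely circulated delta robot inverse kinematics derivation, which leans directly on that parallelogram constraint: given a target position for the (non-rotating) end effector, it returns the three shoulder angles. The link lengths F, E, RF, and RE are made-up example values, not measurements from any real robot.

```python
from math import sqrt, atan2, degrees, radians, cos, sin, tan, pi

# Geometry (all lengths in the same arbitrary units) - made-up example values.
F  = 60.0   # side length of the fixed (top) triangle
E  = 30.0   # side length of the end-effector triangle
RF = 50.0   # upper arm length (shoulder to elbow)
RE = 80.0   # lower parallelogram arm length (elbow to effector)

TAN30 = tan(pi / 6)

def _angle_yz(x0, y0, z0):
    """Shoulder angle (degrees) for the arm whose plane is the YZ plane,
    or None if the target is unreachable. The target must be below the base (z0 != 0)."""
    y1 = -0.5 * F * TAN30          # shoulder joint position on the base
    y0 = y0 - 0.5 * E * TAN30      # shift the effector joint out to the effector's edge
    # The elbow lies on the intersection of two circles; along it, z = a + b*y.
    a = (x0*x0 + y0*y0 + z0*z0 + RF*RF - RE*RE - y1*y1) / (2.0 * z0)
    b = (y1 - y0) / z0
    d = -(a + b*y1)**2 + RF*RF*(b*b + 1.0)    # discriminant
    if d < 0:
        return None                            # this arm can't reach the target
    yj = (y1 - a*b - sqrt(d)) / (b*b + 1.0)    # choose the outward-pointing elbow
    zj = a + b*yj
    return degrees(atan2(-zj, y1 - yj))

def inverse_kinematics(x, y, z):
    """Return (theta1, theta2, theta3) in degrees, or None if the point is unreachable."""
    angles = []
    for k in range(3):                         # rotate the target into each arm's frame
        phi = radians(120.0 * k)
        xr =  x*cos(phi) + y*sin(phi)
        yr = -x*sin(phi) + y*cos(phi)
        t = _angle_yz(xr, yr, z)
        if t is None:
            return None
        angles.append(t)
    return tuple(angles)

print(inverse_kinematics(0.0, 0.0, -70.0))     # a point straight down below the base
```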

The beauty of the Canfield joint is that two such joints connected in series can act as either a Delta Robot or a Stewart Platform, with loads evenly distributed no matter what the orientation.

Here are some more videos of commercial Delta Robots in action:



Sunday, April 17, 2011

Carnival of Space 193

Welcome to the 193rd edition of the Carnival of Space.

We'll start out the Carnival with the most exotic objects in the universe: Black Holes. Last week's host Vega 0.0 explains the main features of a Kerr black hole.

Discovery Space news tells of a new visualization tool that helps to model the extreme nature of spacetime in colliding black holes.

Is it possible for life to exist within a black hole? Next Big Future says that it's possible.

Moving to only slightly less exotic objects, astroblogger Ian Musgrave looks for Nibiru while explaining some practical astronomy.

On the Road to Endeavour there is an eerie synchronicity in two photos, one taken on Mars and the other on Earth.

Closer to home Next Big Future has news of Moon Express, a Silicon Valley startup building robots capable of mining the surface of the moon for precious metals and rare metallic elements.

In other space commerce news, Urban Astronomer cheers on SpaceX for winning the contract to launch the next generation of Iridium satellites.

For a bit of space history, Vintage Space talks about the Saturn V - its genesis and why it was "lost".

Finally, Steve's Astro Corner looks through a Galileo telescope at Saturn, with surprising results.

That's it for this week's Carnival of Space. Carnival #194 will be held at the Planetary Society blog; if you want to be involved you can either send your entries to Emily at PSB or enter them in the Carnival of Space articles spreadsheet at Google Docs.

Saturday, April 16, 2011

Canfield Joints in POVray

Last June, Kirk Sorensen wrote a blog post at Selenian Boondocks about Canfield Joints. These devices enable pointing a rocket nozzle or solar panel anywhere within a hemisphere.

Part of the problem he had in describing the Canfield Joint is that it operates in three dimensions, which makes it hard to visualize with 2-D images. Over the last several years I've been using POVray to do 3D designs and animations, and I realized that a character skeleton system I published last year (bones.inc) would also work well for Canfield Joints.

So, I did a little math (OK, a LOT of math) and came up with a short POVray program that can do 3D animation of Canfield joints. The program is included in the bones.inc zip file, and the results are shown in the video below.


Given the three base angles for a Canfield joint, the software automatically calculates all the rest of the angles and the position and orientation of the distal plate. An arbitrary number of Canfield joints can be linked in series and manipulated. Now anyone with POVray can design and visualize their own Canfield joints in operation.
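
The "linked in series" part works the same way as any serial kinematic chain: each joint contributes a rigid transform, and composing the transforms in order gives the position and orientation of the final plate. The Canfield-specific math lives in bones.inc; the Python sketch below only illustrates that general composition idea with 4x4 homogeneous transforms, and the angles and offsets in it are arbitrary placeholders.

```python
import numpy as np

def rot_x(deg):
    """4x4 homogeneous rotation about the x axis."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]])

def translate(x, y, z):
    """4x4 homogeneous translation."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

# Each "joint" here is just a placeholder transform: a tilt followed by an offset
# along the link. A real Canfield joint's transform depends on its three base
# angles, which is exactly what bones.inc works out.
joint1 = rot_x(20.0)  @ translate(0, 0, 2.0)
joint2 = rot_x(-35.0) @ translate(0, 0, 2.0)

# Composing the transforms in order gives the pose of the distal plate.
distal = joint1 @ joint2
origin = distal @ np.array([0.0, 0.0, 0.0, 1.0])   # where the plate's center ends up
normal = distal @ np.array([0.0, 0.0, 1.0, 0.0])   # which way the plate points
print(origin[:3], normal[:3])
```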

Wednesday, April 06, 2011

Artificial Intelligence 101 - part 3

History of AI - from Golems to Dartmouth

It has been way too long (two years!) since I started this series, and I figure it's time to get back into it. For a refresher, here's part 1: What is intelligence? and part 2: Why Artificial Intelligence?

What follows is by no means a definitive history of artificial intelligence. In fact, Wikipedia already has a very good entry on the history of artificial intelligence. Instead, this is a fairly brief history, with large gaps. I'm a bit more free to editorialize than Wikipedia.

distant past

Humanity has long searched for ways to get the benefits of human intelligence without the requirement of actual humans. For most of human history, the solution was slavery - treating other human beings as if they were machines, and using only a small fraction of their mental potential.

Not only is slavery evil, it is also inefficient - slaves still require water, food, shelter, and clothing, none of which is free. Automation of even the simplest sort is so vastly more efficient, by any metric one uses for comparison, that as soon as a task could be automated, it was.

The desire for intelligence in the inanimate has persisted through recorded history. The Golem is a fairly early example, and a cautionary tale as well. From Wikipedia:
The most famous golem narrative involves Judah Loew ben Bezalel, the late 16th century chief rabbi of Prague, also known as the Maharal, who reportedly created a golem to defend the Prague ghetto from anti-Semitic attacks and pogroms. Depending on the version of the legend, the Jews in Prague were to be either expelled or killed under the rule of Rudolf II, the Holy Roman Emperor. To protect the Jewish community, the rabbi constructed the Golem out of clay from the banks of the Vltava river, and brought it to life through rituals and Hebrew incantations. As this golem grew, it became increasingly violent, killing gentiles and spreading fear. A different story tells of a golem that fell in love, and when rejected, became the violent monster seen in most accounts. Some versions have the golem eventually turning on its creator or attacking other Jews.
The Zombie - no, not the Night of the Living Dead type, the Haitian voodoo kind - is a similar cultural expression of this desire for (partial) human intelligence animating human bodies. In both cases - the Golem (an artificial creature) and the Haitian Zombie (a person enslaved through artificial chemical means and cultural expectations) - the desire to control a portion of human-like intelligence to perform tasks is evident.

Mary Shelley's Frankenstein is another example - a creature that is not human (because it is built from the dead) is re-animated in the belief that the mind could still function. Once again this fictional creation of artificial intelligence is a cautionary tale, as the creature turns on his creator. As an aside, it is also the inspiration for the subtitle of this blog.

In the late 18th century, the Turk chess-playing machine was a fraud that played off the desire for automated intelligence. A skilled chess player was hidden inside an elaborately constructed table with attached mechanical "opponent", operated by the player inside via levers and gears. So clever was the mechanism for hiding the chess master inside, and so great the desire (on the part of the audience) to believe that it was possible to automate intelligence with levers and gears, that the fraud persisted for decades.

the 1800s

In 1837, Charles Babbage described the Analytical Engine, a design for a programmable computer that would have been the first Turing-complete machine had it ever been finished; only fragments of it were built. Lady Ada Byron (better known today as Ada Lovelace) became the first computer programmer by writing a program to compute Bernoulli numbers for the Analytical Engine. This was an important step, as it proved that some tasks once thought to be purely mental in nature could in fact be performed by machines - that calculation could be automated.
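
As a modern echo of that first program, here is a short Python sketch that computes Bernoulli numbers from the standard recurrence. It is only meant to show the same calculation being automated today; it is not a reconstruction of Lovelace's actual Note G algorithm.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions, using the recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1 (with the B_1 = -1/2 convention)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

# B_0 through B_8: 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
print(bernoulli(8))
```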

Then in 1854, George Boole developed a variation on elementary algebra called Boolean Algebra, operating solely on "truth values" of 0 or 1 rather than on all numbers. This Boolean algebra is the basis for all digital logic.
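
To make "the basis for all digital logic" concrete, here is a tiny Python sketch of a half adder and a full adder built from nothing but the Boolean operations AND, XOR, and OR - which is all the hardware in an adder circuit is doing.

```python
def half_adder(a, b):
    """Add two one-bit values using only Boolean operations.
    Returns (sum_bit, carry_bit)."""
    return a ^ b, a & b        # XOR gives the sum bit, AND gives the carry

def full_adder(a, b, carry_in):
    """Chain two half adders (plus an OR) to add three one-bit values."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# 1 + 1 + 1 = binary 11: sum bit 1, carry bit 1
print(full_adder(1, 1, 1))     # (1, 1)
```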

the 1930s

In 1937, Claude Shannon published his groundbreaking master's thesis, A Symbolic Analysis of Relay and Switching Circuits (available here). In that paper he showed that it was possible to use electromechanical relays to solve Boolean Algebra problems. In 1948 he introduced the idea of the "bit" as the smallest unit of information (among other ideas like information entropy) in the paper A Mathematical Theory of Communication (available here), which is itself the foundation of what we today call Information Theory. He showed that electronic circuits could perform logical operations, and that extraordinarily complex computations could be performed with electronics. Once again, what once had required the intelligence of a human could now be automated.
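
Shannon's measure is easy to compute for yourself. The following Python sketch estimates the information content of a string, in bits per symbol, using the entropy formula from the 1948 paper.

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message):
    """Shannon entropy H = -sum(p * log2(p)) over the symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return sum(-(c / total) * log2(c / total) for c in counts.values())

print(entropy_bits_per_symbol("aaaa"))         # 0.0  - a constant message carries no information
print(entropy_bits_per_symbol("abab"))         # 1.0  - one bit per symbol
print(entropy_bits_per_symbol("hello world"))  # roughly 2.85 bits per symbol
```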

In 1936, Alan Turing described a thought experiment representing an automatic computing machine; this thought experiment has since become known as the "Turing Machine". In 1948 Turing described the idea as:
...an infinite memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.
Although the physical details are different, the description is identical to the operation of any computer today. Instead of a physical tape being fed through the machine, a memory address is polled and the resulting "symbol" consists of a pattern of low and high voltages on wires, what we think of as the Zeros and Ones in a byte.
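
To see how little machinery the description above actually requires, here is a toy Turing machine simulator in Python. The rules table maps (state, scanned symbol) to (symbol to write, direction to move, next state); the example machine, which simply inverts a string of bits, is my own and purely illustrative.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.
    rules maps (state, scanned_symbol) -> (write_symbol, move, next_state),
    where move is 'L' or 'R'. Returns the tape contents when the machine halts."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        scanned = tape.get(head, blank)
        write, move, state = rules[(state, scanned)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A tiny machine that walks right, inverting bits, and halts at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))   # prints 01001
```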

If a Turing machine is capable of emulating any other Turing machine, it is considered a Universal Turing Machine. This led to the idea of the stored-program computer: the description of the machine being emulated - and the program doing the emulating - is just more data held in memory. Indeed, most computers today can emulate pretty much any other computer, and the process usually requires nothing more than the appropriate software.

We're not done with Alan Turing yet. Besides groundbreaking work in computational theory, he also turned his attention to artificial intelligence. He devised a thought experiment to determine whether an artificial device was actually intelligent. The Turing Test consists of a human, an AI, and a human judge. The judge has a conversation with both the AI and the human through a teletype arrangement - what we would recognize today as a chat window - and tries to decide which one is the human. If the judge can't figure it out solely from the conversation in chat, then the AI is considered intelligent, according to the Turing Test. It isn't a perfect test, but at least it was a first attempt to evaluate the quality of our efforts towards developing artificial intelligence.
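
At its core the Turing Test is just a blinded trial, which is easy to sketch in code. In the hypothetical Python sketch below, human_reply and machine_reply are stand-ins for the two conversation partners (they are not real chatbots); the judge sees only anonymous transcripts, and we record whether it picked out the machine.

```python
import random

def run_trial(judge, human_reply, machine_reply, prompts):
    """One blinded round of the imitation game.
    judge(transcripts) must return the label ("A" or "B") it believes is the machine.
    Returns True if the judge identified the machine correctly."""
    contestants = [human_reply, machine_reply]
    random.shuffle(contestants)                      # hide which label is which
    respondents = {"A": contestants[0], "B": contestants[1]}
    transcripts = {
        label: [(q, reply(q)) for q in prompts]      # the "chat window" the judge sees
        for label, reply in respondents.items()
    }
    guess = judge(transcripts)
    return respondents[guess] is machine_reply

# If, over many trials, the judge does no better than a coin flip,
# the machine "passes" in the sense Turing proposed.
```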

So to recap: there is a historical desire for at least (or only) a portion of human intelligence to perform tasks; it was shown that the task of calculation can be automated, mechanically; calculation and complex logical operations can also be automated with electronics; calculating machines with stored programs can emulate (or simulate) other calculating machines; and it is possible to test a machine for intelligence, at least to a first-order approximation.

the birth of AI as a field of study

The 1956 Dartmouth Conference is generally regarded as the birth of AI. Indeed, the term "artificial intelligence" was coined for the conference, and it was there that the phrase came to be accepted as the name of the new field of study. The Dartmouth Conference led to an explosion of work in the field that continued until about 1974.

The Dartmouth Conference is a pretty good place to stop this section. The next three parts of this series will look at three approaches to artificial intelligence: neural networks, fuzzy cognitive maps, and genetic algorithms.